9 research outputs found

    Accessibility and inclusiveness of new information and communication technologies for disabled users and content creators in the Metaverse

    PURPOSE: The study critically reassesses existing Metaverse concepts and proposes a novel framework for the inclusiveness of physically disabled artists. The purpose is to enable and inspire physically disabled users and content creators to participate in the evolving concept of the Metaverse. The article also highlights the need for standards and regulations governing the inclusion of people with disabilities in Metaverse projects.
    MATERIALS AND METHODS: The study examines current information technologies and their relevance to the inclusion of physically disabled individuals in the Metaverse. We analyse existing Metaverse concepts, exploring emerging information technologies such as Virtual and Augmented Reality and the Internet of Things. The framework presented in the article is based on the active involvement of disabled creatives in the development of solutions for inclusivity.
    RESULTS: The review reveals that despite the proliferation of Blockchain Metaverse projects, the inclusion of physically disabled individuals in the Metaverse remains distant, with limited standards and regulations in place. The article proposes a concept of the Metaverse that leverages emerging technologies to enable greater engagement of disabled creatives. This approach is designed to enhance inclusiveness in the Metaverse landscape.
    CONCLUSIONS: Active involvement of physically disabled individuals in the design and development of Metaverse platforms is crucial for promoting inclusivity. The framework for accessibility and inclusiveness in decentralised Metaverses provides a basis for the meaningful participation of disabled creatives.
The article emphasises the importance of addressing the mechanisms for art production by individuals with disabilities in the emerging Metaverse landscape.
IMPLICATIONS FOR REHABILITATION: This article addresses a global challenge related to helping disabled people operate in modern society, targeting new and emerging technologies and enabling an early understanding of the actions required for the inclusion of people with disabilities in the Metaverse. The increased use of advanced technologies (e.g., AI and IoT) in the Metaverse amplifies the importance of this research. The aggregate impact of this research for science and society is more inclusive and unbiased Metaverses that comply with regulations on anti-disability discrimination; a secondary value lies in the increased technological opportunities from a breakthrough in designing new, more inclusive and autonomous devices. The study presents a new framework for integrating new technologies into existing Metaverses, resulting in stronger accessibility and inclusiveness of the Metaverse. This creates a new understanding of how new technologies can be used to prevent disability discrimination and to understand disability requirements early. We also highlight normative constraints and the need for further reflection and weighing to avoid dystopian futures for the physically disabled in relation to the Metaverse.

    SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations

    Research into adversarial examples (AE) has developed rapidly, yet static adversarial patches remain the main technique for conducting attacks in the real world, despite being obvious, semi-permanent and unmodifiable once deployed. In this paper, we propose Short-Lived Adversarial Perturbations (SLAP), a novel technique that allows adversaries to realize physically robust real-world AE by using a light projector. Attackers can project a specifically crafted adversarial perturbation onto a real-world object, transforming it into an AE. This gives the adversary greater control over the attack compared to adversarial patches: (i) projections can be dynamically turned on and off or modified at will; (ii) projections do not suffer from the locality constraint imposed by patches, making them harder to detect. We study the feasibility of SLAP in the self-driving scenario, targeting both object detection and traffic sign recognition tasks, focusing on the detection of stop signs. We conduct experiments in a variety of ambient light conditions, including outdoors, showing how in non-bright settings the proposed method generates AE that are extremely robust, causing misclassifications on state-of-the-art networks with up to 99% success rate for a variety of angles and distances. We also demonstrate that SLAP-generated AE do not present the detectable behaviours seen in adversarial patches and therefore bypass SentiNet, a physical AE detection method. We evaluate other defences, including an adaptive defender using adversarial learning, which can thwart the attack with up to 80% effectiveness even in favourable attacker conditions.
    Comment: 13 pages; to be published in USENIX Security 2021; project page: https://github.com/ssloxford/short-lived-adversarial-perturbation
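
    The projector attack amounts to optimising an additive perturbation under physical constraints: a projector can only add light, and the perturbed surface must stay within displayable intensity. A minimal sketch of that constrained optimisation, using a toy linear scorer as a stand-in for the real detector networks (all names and parameters here are illustrative, not the paper's actual method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a detector: a linear scorer over 64 flattened "pixels"
# (the paper attacks real object detectors; this is purely illustrative).
w = rng.normal(size=64)

def score(x: np.ndarray) -> float:
    return float(x @ w)   # higher score = "stop sign detected"

x = rng.uniform(0.2, 0.6, size=64)   # the physical object in a dim scene

# Craft an additive perturbation under the projector's physical constraints:
# a projector can only ADD light (delta >= 0), and pixels stay within [0, 1].
delta = np.zeros_like(x)
for _ in range(200):
    delta -= 0.05 * w                    # gradient step to lower the score
    delta = np.clip(delta, 0.0, None)    # a projection cannot remove light
    delta = np.minimum(delta, 1.0 - x)   # keep x + delta displayable

# score(x + delta) is now substantially lower than score(x).
```

    The non-negativity constraint is what distinguishes this from a standard digital attack: only the components of the perturbation that add light survive each step, which is one reason ambient brightness matters so much to the attack's success.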

    Accessibility and Inclusiveness of New Information and Communication Technologies for Disabled Users and Content Creators in the Metaverse

    Despite the proliferation of Blockchain Metaverse projects, the inclusion of physically disabled individuals in the Metaverse remains distant, with limited standards and regulations in place. In response, the article proposes a concept of the Metaverse that leverages emerging technologies, such as Virtual and Augmented Reality and the Internet of Things, to enable greater engagement of disabled creatives. This approach aims to enhance inclusiveness in the Metaverse landscape. Based on the findings, the paper concludes that the active involvement of physically disabled individuals in the design and development of Metaverse platforms is crucial for promoting inclusivity. The proposed framework for accessibility and inclusiveness in the Virtual, Augmented, and Mixed realities of decentralised Metaverses provides a basis for the meaningful participation of disabled creatives. The article emphasises the importance of addressing the mechanisms for art production by individuals with disabilities in the emerging Metaverse landscape. Additionally, it highlights the need for further research and collaboration to establish standards and regulations that facilitate the inclusion of physically disabled individuals in Metaverse projects.

    GazeLockPatterns: Comparing Authentication Using Gaze and Touch for Entering Lock Patterns

    In this work, we present a comparison between Android’s lock patterns for mobile devices (TouchLockPatterns) and an implementation of lock patterns that uses gaze input (GazeLockPatterns). We report the results of a between-subjects study (N=40), showing that for the same authentication interface layout, people employ comparable strategies for pattern composition. We discuss the pros and cons of adapting lock patterns to gaze-based user interfaces. We conclude with opportunities for future work, such as using data collected during authentication to calibrate eye trackers.

    The Role of Eye Gaze in Security and Privacy Applications: Survey and Future HCI Research Directions

    For the past 20 years, researchers have investigated the use of eye tracking in security applications. We present a holistic view of gaze-based security applications. In particular, we canvass the literature and classify the utility of gaze in security applications into a) authentication, b) privacy protection, and c) gaze monitoring during security-critical tasks. This allows us to chart several research directions, most importantly: 1) conducting field studies of implicit and explicit gaze-based authentication, enabled by recent advances in eye tracking; 2) research on gaze-based privacy protection and gaze monitoring in security-critical tasks, which are under-investigated yet very promising areas; and 3) understanding the privacy implications of pervasive eye tracking. We discuss the most promising opportunities and most pressing challenges of eye tracking for security that will shape research in gaze-based security applications for the next decade.

    Security of mixed reality systems: authenticating users, devices, and data

    Mixed reality devices continuously scan their environment in order to naturally blend virtual objects with the user’s real-time view of their physical environment. Given the potential of these technologies to profoundly change how individuals interact with their environments, many of the largest technology companies are releasing their mixed reality systems and devoting significant resources towards achieving technological leadership in this field. However, due to the recency of the first commercially available mixed reality devices and their specific interaction channels, existing research has yet to provide practical proposals for achieving many of the core security objectives. Furthermore, given that these devices continuously analyze their environment using multiple front-facing cameras, designing secure systems requires reassessing the commonly assumed threat models. In this thesis, we aim to bridge this gap by focusing on secure authentication on mixed reality headsets. Taking into account the stronger assumed adversary models and the interface capabilities of existing mixed reality devices, we propose methods for user and device authentication, and show how these devices can be used to secure one’s interactions with legacy systems. Considering that mixed reality headsets are starting to support gaze tracking, we propose, build a prototype of, and experimentally evaluate the use of reflexive eye movements as a biometric modality well suited to authentication on MR headsets. As an added benefit, the reflexiveness and predictability of one’s eye movement responses make it possible to incorporate the biometric measurements into challenge-response protocols. This allows the system to prevent replay attacks, one of the most common attack vectors on biometrics.
Furthermore, given the many multi-user applications of mixed reality technologies that rely on direct communication between users’ devices, we also research secure and usable methods to pair mixed reality headsets. We propose a practical pairing protocol, implement a system prototype using two commercially available mixed reality headsets, and evaluate its security and usability. Finally, we show that the front-facing cameras of mixed reality headsets can also serve as a means of securing legacy electronic systems. We therefore build and evaluate a prototype of a system that uses a trusted device with video capture and analysis capabilities to authenticate the data that the user inputs when using a potentially compromised local client to communicate with a remote server.
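
    The replay-resistance property of challenge-response biometrics can be illustrated abstractly: the response must depend on both the enrolled template and a fresh challenge, so a captured response is useless against a new challenge. In the sketch below, an HMAC stands in for "the measured reflexive eye response to this stimulus"; this is purely illustrative, since the thesis uses measured eye movements, not a keyed hash:

```python
import hashlib
import hmac

def respond(template: bytes, challenge: bytes) -> bytes:
    """Device side: produce a response bound to BOTH the user's enrolled
    template and this specific challenge (stand-in for a reflexive gaze
    response to a randomly chosen visual stimulus)."""
    return hmac.new(template, challenge, hashlib.sha256).digest()

def verify(template: bytes, challenge: bytes, response: bytes) -> bool:
    """Verifier side: constant-time comparison against the expected response."""
    return hmac.compare_digest(respond(template, challenge), response)

template = b"enrolled-eye-movement-features"   # hypothetical enrolled biometric
c1 = b"stimulus-sequence-A"   # in practice: a fresh random nonce per login
c2 = b"stimulus-sequence-B"
r1 = respond(template, c1)
assert verify(template, c1, r1)       # legitimate response accepted
assert not verify(template, c2, r1)   # replayed response fails on a new challenge
```

    The point of the construction is the second assertion: because each login uses a fresh stimulus, recording and replaying a previous response gains the attacker nothing.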

    Inferring user height and improving impersonation attacks in mobile payments using a smartwatch

    In this paper, we show that as a user makes mobile payments with a smartwatch, the height of the user can be inferred purely from inertial sensor data captured on the watch (with R² scores of up to 0.77). Besides this unwanted information exposure, we also show that users of similar height are more difficult to distinguish in terms of their tap gesture data, and that an attacker who chooses a victim of similar height can improve their chance of successful impersonation (increasing the false acceptance rate by up to 20.6%).
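
    The R² figure quantifies how much of the variance in height the regression on watch sensor data explains (R² = 1 means perfect prediction, 0 means no better than the mean). A minimal sketch with synthetic data — the feature, coefficients, and noise level below are invented for illustration; the paper regresses on real inertial features:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-tap feature: wrist travel distance during the gesture,
# which plausibly scales with arm length and hence height (values invented).
reach = rng.uniform(0.5, 0.9, size=50)              # metres
height = 100 * reach + 110 + rng.normal(0, 3, 50)   # cm, synthetic ground truth

# Ordinary least squares fit, then the R^2 goodness-of-fit score.
A = np.column_stack([reach, np.ones_like(reach)])
coef, *_ = np.linalg.lstsq(A, height, rcond=None)
pred = A @ coef
ss_res = np.sum((height - pred) ** 2)               # residual sum of squares
ss_tot = np.sum((height - height.mean()) ** 2)      # total sum of squares
r2 = 1 - ss_res / ss_tot
```

    An R² of 0.77 from real sensor data, as reported, means most of the between-user height variance leaks through the watch's inertial readings alone.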

    WatchAuth: user authentication and intent recognition in mobile payments using a smartwatch

    In this paper, we show that the tap gesture, performed when a user ‘taps’ a smartwatch onto an NFC-enabled terminal to make a payment, is a biometric capable of implicitly authenticating the user and simultaneously recognising intent-to-pay. The proposed system can be deployed purely in software on the watch, without requiring updates to payment terminals. It is agnostic to terminal type and position, and the intent recognition portion does not require any training data from the user. To validate the system, we conduct a user study (n=16) to collect wrist motion data from users as they interact with payment terminals, and collect long-term data from a subset of them (n=9) as they perform daily activities. Based on this data, we identify optimum gesture parameters and develop authentication and intent recognition models, for which we achieve EERs of 0.08 and 0.04, respectively.
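
    The equal error rate (EER) reported above is the operating point where the false acceptance rate (impostors accepted) equals the false rejection rate (genuine users rejected). A minimal sketch of computing it from genuine and impostor score sets — the scores below are toy data, not the study's:

```python
import numpy as np

def eer(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Equal error rate: sweep a decision threshold over all observed
    scores and return the error rate where FAR and FRR meet."""
    best_gap, best_rate = float("inf"), 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)   # impostor taps wrongly accepted
        frr = np.mean(genuine < t)     # genuine taps wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, best_rate = abs(far - frr), (far + frr) / 2
    return float(best_rate)

# Invented similarity scores: genuine taps should score higher than impostors'.
genuine = np.array([0.9, 0.8, 0.7, 0.6, 0.3])
impostor = np.array([0.4, 0.35, 0.2, 0.1, 0.05])
rate = eer(genuine, impostor)   # 0.2 for this toy data
```

    An EER of 0.08 therefore means that at the balanced threshold, roughly 8% of genuine taps are rejected and 8% of impostor taps are accepted.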